Conversation
auto query_frame = relay_impl_.getRelayQueryMessage();
std::lock_guard<std::mutex> lock(query_mutex_);

// If a request is already in-flight, check if it has timed out
Is this logic necessary? Do we actually care to reject or would we rather just resend? The other promise would just time out because we resent and the new one would be used.
But since we only have 1 request in flight at a time, that should not matter?
I agree after coming back to this. I've simplified the logic to just abandon an existing query if a new query of the same type arrives before the first one receives a response. With a single-threaded client, this should only happen if the CAN message is lost.
// Create a new promise and get its future
- std::promise<MvecRelayQueryReply> promise;
+ std::promise<std::optional<MvecRelayQueryReply>> promise;
Why promise optional? That seems redundant?
/// @brief Mutex protecting promise queues for thread safety
std::mutex promises_mutex_;
/// @brief Single-slot promise for population query response
std::optional<std::promise<std::optional<MvecPopulationReply>>> population_reply_promise_;
I don't think you need the inner optional here! Just don't fulfill the promise; the future will take care of not having a value.
… internally for each message response type.
Force-pushed from 5a97d6e to 6ac8c32.
…pdated to just breaking promises for in-flight messages if their responses are not received.
Description:
The Problem:
The Solution:
- A `std::optional<std::promise<T>>` member variable per response type (a single request slot).
- Request methods return a `std::future<T>` to the caller.
- Abandoning an in-flight request destroys its promise, so the old future sees `broken_promise` on `get()`.
- Timeouts are handled by the caller via `future.wait_for()`.
- Replaced the single `promises_mutex_` with per-type mutexes (`query_mutex_`, `command_mutex_`, `population_mutex_`) so parse and request methods for different types don't block each other.

Additional Fixes:
- Removed `static` from local vars in `get_last_fuse_status`, `get_last_relay_status`, `get_last_error_status`: I don't think we want those to share state across different instances. (We don't really want different instances regardless.)
- Updated `set_single_relay()` in `mvec_node.cpp` to handle `broken_promise` from concurrent service calls (matches existing pattern in `set_multi_relay()`).
- Promises now hold `T` directly, not `optional<T>`.
- Renamed `open()` to `openSocket()` and `setReceptionCallback` to `setOnReceiveCallback`.

Pros for this Method:
- No `optional` wrapping on the future: callers get `std::future<T>`, not `std::future<std::optional<T>>`.

Cons for this Method:
- Callers must handle `broken_promise` exceptions if they send a new request of the same type while one is pending.

Tests:
- Added tests (`mvec_relay_socketcan.cpp`) covering: basic flow, abandon-and-resend, mismatched response, no-promise response drop, timeout, cross-type isolation, concurrent threading safety.